Unsupervised and semi-supervised ML methods such as variational autoencoders (VAE) have become widely adopted across multiple areas of physics, chemistry, and materials sciences due to their capability for disentangled representations and for finding latent classifications and regressions over complex experimental data. Like other ML problems, VAEs require hyperparameter tuning, e.g., balancing the Kullback-Leibler (KL) and reconstruction terms. However, the training process, and the resulting manifold topology and connectivity, depend not only on the hyperparameters but also on their evolution during training. Because exhaustive search in high-dimensional hyperparameter spaces is inefficient, here we explore a latent Bayesian optimization (zBO) approach for the hyperparameter trajectory optimization of unsupervised and semi-supervised ML, demonstrated for a rotationally invariant VAE. We apply this approach to finding joint discrete and continuous rotationally invariant representations for the MNIST dataset and for experimental data of a plasmonic nanoparticle material system. The performance of the proposed approach is discussed in detail; it allows for any high-dimensional hyperparameter tuning or trajectory optimization of other ML models.
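The KL/reconstruction balance mentioned above is typically controlled by a single scalar weight (often called beta) whose value, and trajectory over training, must be tuned. A minimal NumPy sketch of the weighted objective together with a hypothetical linear annealing trajectory; the schedule shape and all names here are illustrative assumptions, not the paper's zBO method:

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def vae_loss(x, x_recon, mu, logvar, beta):
    # beta balances the reconstruction and KL terms; the abstract's point is
    # that the *trajectory* of beta over training matters, not just its value
    recon = np.mean((x - x_recon) ** 2)
    return recon + beta * kl_diag_gaussian(mu, logvar)

def beta_trajectory(step, total_steps, beta_start=0.0, beta_end=1.0):
    # hypothetical linear annealing schedule -- one point in the space of
    # trajectories that a Bayesian optimizer could search over
    t = step / max(total_steps - 1, 1)
    return beta_start + t * (beta_end - beta_start)
```

In this framing, zBO would treat the whole schedule (here parameterized by `beta_start` and `beta_end`) as the object being optimized, rather than a single static beta.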
Recent progress in machine learning methods and the emerging availability of programmable interfaces for scanning probe microscopes (SPMs) have propelled automated and autonomous microscopy to the forefront of attention in the scientific community. However, enabling automated microscopy requires the development of task-specific machine learning methods, an understanding of the interplay between physics discovery and machine learning, and fully defined discovery workflows. This, in turn, requires balancing the physical intuition and prior knowledge of the domain scientist with rewards that define the experimental objectives, and machine learning algorithms that can translate these into specific experimental protocols. Here, we discuss the basic principles of Bayesian active learning and illustrate its application to SPM. We progress from the Gaussian process as a simple data-driven method and Bayesian inference for physical models as an extension of physics-based functional fits, to more complex deep kernel learning methods, structured Gaussian processes, and hypothesis learning. These frameworks allow the use of prior data, of the specific functionalities encoded in spectral data, and of the physical laws manifested during the experiment. The discussed frameworks can be universally applied to all techniques combining imaging and spectroscopy, SPM methods, nanoindentation, electron microscopy and spectroscopy, and chemical imaging methods, and can be particularly impactful for destructive or irreversible measurements.
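The Gaussian process named above as the simplest data-driven method can be sketched in a few lines. The following NumPy implementation of standard GP posterior prediction with an RBF kernel is a generic illustration, not the authors' SPM-specific code; the kernel choice and hyperparameter values are assumptions:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    # squared-exponential kernel k(x, x') = s^2 exp(-(x - x')^2 / (2 l^2))
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6, length_scale=1.0):
    # standard GP regression posterior mean and variance for 1-D inputs
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test, length_scale)
    Kss = rbf_kernel(x_test, x_test, length_scale)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = np.diag(Kss - Ks.T @ v)
    return mean, var
```

The predictive variance is what drives active learning: the next measurement location is chosen where the model is least certain (or where an acquisition function trading off uncertainty and reward is maximized).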
We investigate a model for image/video quality assessment based on building a set of codevectors representing, in a sense, some basic properties of images, similar to the well-known CORNIA model. We analyze the codebook building method and propose some modifications for it. The algorithm is also investigated from the point of view of inference time reduction. Both natural and synthetic images are used for building codebooks, and some analysis of the synthetic images used for codebooks is provided. It is demonstrated that the results on quality assessment may be improved with the use of synthetic images for codebook construction. We also demonstrate regimes of the algorithm in which real-time execution on a CPU is possible with sufficiently high correlations with mean opinion score (MOS). Various pooling strategies are considered, as well as the problem of metric sensitivity to bitrate.
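As a rough illustration of the codebook idea, a CORNIA-style encoder computes similarities between image patches and codevectors and max-pools them into a fixed-length descriptor that a regressor can map to a quality score. The sketch below assumes pre-normalized patches and a fixed codebook matrix; the names and the exact soft-assignment/pooling scheme are illustrative assumptions, not the authors' modified algorithm:

```python
import numpy as np

def encode_patches(patches, codebook):
    # soft assignment: dot-product similarity of each (normalized) patch
    # to each codevector, split into positive and negative parts
    sims = patches @ codebook.T            # (n_patches, n_codes)
    pos = np.maximum(sims, 0.0)
    neg = np.maximum(-sims, 0.0)
    # max-pool over patches -> fixed-length image descriptor (2 * n_codes)
    return np.concatenate([pos.max(axis=0), neg.max(axis=0)])
```

Inference time in such a scheme is dominated by the `patches @ codebook.T` product, which is why reducing the codebook size or the number of sampled patches, as explored in the abstract, directly reduces runtime.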
The body of research on classification of solar panel arrays from aerial imagery is increasing, yet there are still not many public benchmark datasets. This paper introduces two novel benchmark datasets for classifying and localizing solar panel arrays in Denmark: a human-annotated dataset for classification and segmentation, as well as a classification dataset acquired using self-reported data from the Danish national building registry. We explore the performance of prior works on the new benchmark dataset, and present results after fine-tuning models using an approach similar to recent works. Furthermore, we train models of newer architectures and provide benchmark baselines for our datasets in several scenarios. We believe the release of these datasets may improve future research in both local and global geospatial domains for identifying and mapping solar panel arrays from aerial imagery. The data is accessible at https://osf.io/aj539/.
There has been a great deal of recent interest in learning and approximation of functions that can be expressed as expectations of a given nonlinearity with respect to its random internal parameters. Examples of such representations include "infinitely wide" neural nets, where the underlying nonlinearity is given by the activation function of an individual neuron. In this paper, we bring this perspective to function representation by neural stochastic differential equations (SDEs). A neural SDE is an Itô diffusion process whose drift and diffusion matrix are elements of some parametric families. We show that the ability of a neural SDE to realize nonlinear functions of its initial condition can be related to the problem of optimally steering a certain deterministic dynamical system between two given points in finite time. This auxiliary system is obtained by formally replacing the Brownian motion in the SDE by a deterministic control input. We derive upper and lower bounds on the minimum control effort needed to accomplish this steering; these bounds may be of independent interest in the context of motion planning and deterministic optimal control.
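The reduction described above, replacing the Brownian motion by a deterministic control input, can be illustrated numerically. The sketch below contrasts an Euler-Maruyama step for a scalar SDE with an Euler step for the auxiliary controlled ODE; the function names and the restriction to scalar state are assumptions made for illustration:

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, dt, n_steps, rng):
    # simulate dX = b(X) dt + sigma(X) dW  (the neural SDE)
    x = x0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x = x + drift(x) * dt + diffusion(x) * dw
    return x

def controlled_ode(x0, drift, diffusion, u, dt, n_steps):
    # auxiliary deterministic system: the Brownian increment dW is formally
    # replaced by a control input u(t) dt
    x = x0
    for k in range(n_steps):
        x = x + (drift(x) + diffusion(x) * u(k * dt)) * dt
    return x
```

With zero diffusion the two trajectories coincide; in general, the paper's question is how much control effort (roughly, the size of `u`) is needed to steer the deterministic system between two given points in finite time.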
Powerful hardware services and software libraries are vital tools for quickly and affordably designing, testing, and executing quantum algorithms. A robust large-scale study of how the performance of these platforms scales with the number of qubits is key to providing quantum solutions to challenging industry problems. Such an evaluation is difficult owing to the availability and price of physical quantum processing units. This work benchmarks the runtime and accuracy for a representative sample of specialized high-performance simulated and physical quantum processing units. Results show the QMware cloud computing service can reduce the runtime for executing a quantum circuit by up to 78% compared to the next fastest option for algorithms with fewer than 27 qubits. The AWS SV1 simulator offers a runtime advantage for larger circuits, up to the maximum 34 qubits available with SV1. Beyond this limit, QMware provides the ability to execute circuits as large as 40 qubits. Physical quantum devices, such as Rigetti's Aspen-M2, can provide an exponential runtime advantage for circuits with more than 30 qubits. However, the high financial cost of physical quantum processing units presents a serious barrier to practical use. Moreover, of the four quantum devices tested, only IonQ's Harmony achieves high fidelity with more than four qubits. This study paves the way to understanding the optimal combination of available software and hardware for executing practical quantum algorithms.
Coronary heart disease (CHD) is the leading cause of death in the modern world. The development of modern analytical tools for the diagnosis and treatment of CHD is receiving great attention from the scientific community. Deep-learning-based algorithms, such as segmentation networks and detectors, play an important role in assisting medical professionals through timely analysis of a patient's angiograms. This paper focuses on X-ray coronary angiography (XCA), which is considered the "gold standard" in the diagnosis and treatment of CHD. First, we describe publicly available datasets of XCA images. Then, classical and modern techniques of image preprocessing are reviewed. In addition, common frame selection techniques are discussed, as they are an important factor for input quality as well as model performance. In the following two chapters, we discuss modern vessel segmentation and stenosis detection networks and, finally, the open problems and current limitations of the state of the art.
Operating in real-world conditions is challenging due to the wide range of failures induced by partial observability. In relatively benign settings, such failures can be overcome by retrying or by executing one of a small number of hand-engineered recovery strategies. By contrast, contact-rich sequential manipulation tasks, such as opening doors and assembling furniture, are not amenable to exhaustive hand-engineering. To address this, we propose a general approach for robustifying manipulation policies in a sample-efficient manner. Our approach improves robustness by discovering the failure modes of the current policy through exploration in simulation, and then learning additional recovery skills to handle those failures. To ensure efficient learning, we propose an online algorithm, Value Upper Confidence Limit (Value UCL), that selects which failure modes to prioritize and which state to recover to, such that the expected performance improves maximally in every training episode. We use our approach to learn recovery skills for door opening and evaluate it both in simulation and on a real robot. Compared to open-loop execution, our experiments show that even a limited amount of recovery learning improves task success from 71% to 92.4% in simulation and from 75% to 90% on a real robot.
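The abstract does not give the Value UCL update rule, but UCB-style selection generally scores each candidate by an estimated value plus an exploration bonus that shrinks with visit count. The following is a generic UCB-style sketch over failure modes, a hypothetical stand-in rather than the paper's exact algorithm:

```python
import math

def select_failure_mode(modes, total_visits, c=1.0):
    # modes: {name: (estimated_value_of_training_on_this_mode, visit_count)}
    # UCB-style score: estimated improvement plus an exploration bonus;
    # unvisited modes get an infinite bonus and are tried first
    best, best_score = None, -float("inf")
    for name, (mean_value, visits) in modes.items():
        if visits > 0:
            bonus = c * math.sqrt(math.log(total_visits) / visits)
        else:
            bonus = float("inf")
        score = mean_value + bonus
        if score > best_score:
            best, best_score = name, score
    return best
```

The intended behavior matches the abstract's description at a high level: the learner keeps allocating training episodes to the failure mode whose recovery skill is expected to raise overall task performance the most, while still exploring rarely tried modes.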
A fundamental property of deep learning normalization techniques, such as batch normalization, is making the pre-normalization parameters scale-invariant. The intrinsic domain of such parameters is the unit sphere, and therefore their gradient optimization dynamics can be represented via spherical optimization with a varying effective learning rate (ELR), which was studied previously. In this work, we investigate the properties of training scale-invariant neural networks directly on the sphere using a fixed ELR. We discover three regimes of such training depending on the ELR value: convergence, chaotic equilibrium, and divergence. We study these regimes in detail, both through theoretical examination of toy examples and through thorough empirical analysis of real scale-invariant deep learning models. Each regime has unique features and reflects specific properties of the intrinsic loss landscape, some of which have strong parallels with previous studies on both regular and scale-invariant neural network training. Finally, we demonstrate how the discovered regimes are reflected in conventional training of normalized networks and how they can be leveraged to achieve better optima.
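Optimization of a scale-invariant parameter can be viewed as a step on the unit sphere: the radial gradient component cannot change the loss, so only the tangential component matters, scaled by the effective learning rate. A minimal NumPy sketch of one such projected step; this is a standard construction assumed for illustration, not code from the paper:

```python
import numpy as np

def spherical_sgd_step(w, grad, elr):
    # one SGD step for a scale-invariant parameter, viewed on the unit sphere:
    # drop the radial gradient component (it does not affect the loss),
    # step with the effective learning rate, then project back to the sphere
    g_tan = grad - (grad @ w) * w        # tangential component of the gradient
    w_new = w - elr * g_tan
    return w_new / np.linalg.norm(w_new)
```

With a fixed `elr`, the magnitude of the tangential steps relative to the curvature of the loss landscape on the sphere is what separates the three regimes described in the abstract: small steps converge, intermediate ones settle into a chaotic equilibrium, and large ones diverge.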
The performance of modern deep-learning-based systems depends greatly on the quality of the input objects. For example, face recognition quality will be lower for blurred or corrupted inputs. However, in more complex cases it is hard to predict the influence of input quality on the resulting accuracy. We propose an approach to deep metric learning that allows direct estimation of uncertainty at almost no additional computational cost. The developed ScaleFace algorithm uses trainable scale values that modify similarities in the embedding space. These input-dependent scale values represent a measure of confidence in the recognition result, thus allowing uncertainty estimation. We provide comprehensive experiments on face recognition tasks that show the superior performance of ScaleFace compared to other uncertainty-aware face recognition approaches. We also extend the results to the task of text-to-image retrieval, showing that the proposed approach beats the competitors by a significant margin.
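The core mechanism, a per-input scale multiplying embedding similarity, can be sketched as follows; the function and argument names are illustrative assumptions, not the ScaleFace implementation:

```python
import numpy as np

def scaled_similarity(emb_a, emb_b, scale_a):
    # cosine similarity between two embeddings, modulated by an input-dependent
    # trainable scale; a low scale flattens the resulting logits and thus
    # signals low confidence (high uncertainty) in the recognition result
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return scale_a * (a @ b)
```

Because the scale is produced alongside the embedding by the same network, reading it out as an uncertainty estimate adds essentially no inference cost, which matches the abstract's claim of "almost no additional computational cost."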